Patent abstract:
In embodiments, a plurality of sets of images associated with a geographic region and a period of time is obtained, in which each set of images of the plurality of sets of images comprises multispectral, time series images that represent a respective specific portion of the geographic region over the period of time; one or more crop types growing in each of the specific locations within the specific portion of the geographic region associated with a set of images of the plurality of sets of images are predicted; a crop type classification is determined for each of the specific locations based on the one or more crop types predicted for the respective specific locations; and a crop indicative image is generated comprising at least one image of the multispectral, time series images of the image set overlaid with indications of the crop type classification determined for the respective specific locations.
Publication number: BR112020014942A2
Application number: R112020014942-0
Filing date: 2019-01-15
Publication date: 2020-12-08
Inventors: Cheng-en Guo; Jie Yang; Elliot Grant
Applicant: X Development Llc
IPC main class:
Patent description:

[001] This patent application claims priority to US Provisional Patent Application No. 62/620,939, filed on January 23, 2018, the subject matter of which is incorporated into this document by reference in its entirety.
TECHNICAL FIELD
[002] This disclosure relates generally to image feature detection and, in particular but not exclusively, relates to the use of machine learning in image feature detection.
BACKGROUND INFORMATION
[003] Approximately 11% of the earth's land surface is currently used for agricultural production. Despite the importance of agriculture for human survival, environmental impact, national implications, commercial companies, markets, and the like, there is no consistent, reliable, and/or accurate knowledge of what crops are grown within a geographic region, municipality, state, country, continent, across the planet, or portions of any of the above. If more information regarding agricultural fields were known, seed and fertilizer companies, for example, could better determine available markets for their products in different geographic regions; agricultural insurance companies could more accurately and economically assess premiums; banks could more accurately provide agricultural loans; and/or governments could better assess taxes, allocate subsidies, and the like.
[004] To the extent that agricultural land mapping data exists, such data tends to be inconsistent, inaccurate, outdated, and/or otherwise incomplete for many practical uses. For example, a government entity may survey or sample a small portion of the total agricultural and/or farmland within a geographic region and extrapolate the small data set to approximate the field locations, sizes, shapes, crop types, counts, etc. of all agricultural land currently existing within the geographic region. Due to the labor-intensive nature of collecting such data, agricultural land data tends to be updated infrequently (or too infrequently for many commercial purposes).
[005] The use of agricultural land tends to vary from region to region over time. Farms tend to be significantly smaller in developing countries than in developed countries. Crops can also vary from season to season or from one year to the next for the same field. Agricultural land can be repurposed for non-agricultural uses (for example, housing developments). Therefore, it would be beneficial to inexpensively, accurately, and frequently identify agricultural land at a sufficiently granular level for one or more specific geographic regions, along with the crop(s) cultivated on that agricultural land.
BRIEF DESCRIPTION OF THE DRAWINGS
[006] Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following Figures, in which similar reference numerals refer to similar parts throughout the different views unless otherwise specified. Not all instances of an element are necessarily labeled, so as not to clutter the drawings where appropriate. The drawings are not necessarily to scale; emphasis instead is placed on illustrating the principles being described.
[007] Figure 1 represents a block diagram that illustrates a network view of an exemplary system incorporating the crop type classification technology of the present disclosure, according to some embodiments.
[008] Figure 2 represents a flowchart that illustrates an example process that can be implemented by the system of Figure 1, according to some embodiments.
[009] Figures 3A-3B represent exemplary images according to the crop type classification technique of the present disclosure, according to some embodiments.
[0010] Figure 4 represents a flowchart that illustrates another example process that can be implemented by the system of Figure 1, according to some embodiments.
[0011] Figure 5 represents a flowchart that illustrates yet another example process that can be implemented by the system of Figure 1, according to some embodiments.
[0012] Figure 6 represents an exemplary device that can be implemented in the system of Figure 1, according to some embodiments.
DETAILED DESCRIPTION
[0013] This document describes embodiments of a system, apparatus, and method for classifying crop types in images. In some embodiments, a method comprises obtaining a plurality of sets of images associated with a geographic region and a period of time, wherein each set of images of the plurality of sets of images comprises multispectral, time series images representing a respective specific portion of the geographic region during the time period; predicting one or more crop types growing in each of the specific locations within the specific portion of the geographic region associated with a set of images of the plurality of sets of images; determining a crop type classification for each of the specific locations based on the one or more crop types predicted for the respective specific locations; and generating a crop indicative image that comprises at least one image of the multispectral, time series images of the image set overlaid with indications of the crop type classification determined for the respective specific locations.
[0014] In the description below, numerous specific details are presented to provide a thorough understanding of the embodiments. A person skilled in the art will recognize, however, that the techniques described in this document can be practiced without one or more of the specific details, or with other methods, components, materials, etc.
[0015] Reference throughout this specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the appearances of the phrase "in one embodiment" in different places throughout this specification are not necessarily all referring to the same embodiment. In addition, the particular features, structures, or characteristics can be combined in any suitable way in one or more embodiments.
[0016] Figure 1 represents a block diagram that illustrates a network view of an exemplary system 100 incorporating the crop type classification technology of the present disclosure, according to some embodiments. System 100 may include a network 102, a server 104, a database 106, a server 108, a database 110, a device 112, and an aerial image capture device 116. One or more of server 104, database 106, server 108, database 110, device 112, and aerial image capture device 116 can communicate with network 102. At least server 108 can include the crop type classification technology of the present disclosure to facilitate automatic identification of the crop type(s) in images at a sub-meter resolution, as described in more detail below.
[0017] Network 102 may comprise one or more wired and/or wireless communications networks. Network 102 may include one or more network elements (not shown) to physically and/or logically connect devices for exchanging data with each other. In some embodiments, network 102 may be the Internet, a wide area network (WAN), a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a virtual local area network (VLAN), a cellular network, a carrier network, a WiFi network, a WiMax network, and/or the like. Additionally, in some embodiments, network 102 may be a private, public, and/or secure network, which can be used by a single entity (for example, a company, school, government agency, household, individual, and the like). Although not shown, network 102 may include, without limitation, servers, databases, switches, routers, gateways, base stations, repeaters, software, firmware, intermediate servers, and/or other components to facilitate communication.
[0018] Server 104 may comprise one or more computers, processors, cellular infrastructure, network infrastructure, backhaul infrastructure, host servers, servers, workstations, personal computers, general purpose computers, portable computers, Internet devices, handheld devices, wireless devices, Internet of Things (IoT) devices, wearable devices, and/or the like configured to facilitate collection, management, and/or storage of time series images of land surfaces in one or more resolutions (also referred to as land surface images, land images, image sets, or images). For example, server 104 can command device 116 to capture images of one or more specific geographic regions, to traverse a specific orbit, to capture images at a specific resolution, to capture images at a specific frequency, to capture images of a geographic region within a specific time period, and/or the like. As another example, server 104 can communicate with device 116 to receive images acquired by device 116. In yet another example, server 104 can be configured to obtain/receive images with relevant associated crop information included (for example, identification of crop types, crop boundaries, identified road locations, and/or other annotated information) from government sources, users (for example, such as user 114), and the like. As will be discussed in detail below, images with relevant associated crop information included may comprise human-labeled images, United States Department of Agriculture (USDA) Cropland Data Layer (CDL) data, Common Land Unit (CLU) data from the United States Farm Service Agency (FSA), ground truth data, and/or the like.
[0019] Server 104 and device 116 can communicate with each other directly and/or through network 102.
[0020] Database 106 may comprise one or more storage devices for storing data and/or instructions for use by server 104, device 112, server 108, and/or database 110. For example, database 106 can include images and associated metadata provided by device 116. The contents of database 106 can be accessed via network 102 and/or directly through server 104. The contents of database 106 can be arranged in a structured format to facilitate selective retrieval. In some embodiments, database 106 may be included within server 104.
[0021] Server 108 may comprise one or more computers, processors, cellular infrastructure, network infrastructure, backhaul infrastructure, host servers, servers, workstations, personal computers, general purpose computers, portable computers, Internet devices, handheld devices, wireless devices, Internet of Things (IoT) devices, wearable devices, and/or the like configured to implement one or more features of the crop type classification technology of the present disclosure, according to some embodiments. Server 108 can be configured to use images and possible associated data provided by server 104/database 106 to train and generate a machine learning based model that is able to automatically detect crop boundaries and classify the crop type(s) within the crop boundaries in a plurality of images of land surfaces. Classification of crop types can be at a sub-meter level of granularity or ground resolution. The "trained" machine learning based model can be configured to identify crop boundaries and classify crop types in images without human supervision. The model can be trained by implementing supervised machine learning techniques. Server 108 can also facilitate access to and/or use of images with the crop type classification.
[0022] Server 108 can communicate with one or more of server 104, database 110, and/or device 112 directly or via network 102. In some embodiments, server 108 can also communicate with device 116 to facilitate one or more functions as described above in connection with server 104. In some embodiments, server 108 may include one or more web servers, one or more application servers, one or more intermediate servers, and/or the like.
[0023] Server 108 may include hardware, firmware, circuits, software, and/or combinations thereof to facilitate various aspects of the techniques described in this document. In some embodiments, server 108 may include, without limitation, image filtering logic 120, crop type forecasting logic 122, and crop boundary detection logic 130. As will be described in detail below, the image filtering logic 120 can be configured to apply one or more filtering, "cleaning," or noise-reduction techniques to images to remove artifacts and other unwanted data from the images. The crop type forecasting logic 122 can be configured to predict the crop type(s) growing within each of the crop areas defined by crop boundaries. The crop type forecasting logic 122 can comprise at least a portion of the "trained" machine learning based model. Training logic 124 can be configured to facilitate supervised learning, training, and/or refinement of one or more machine learning techniques to generate/configure crop type forecasting logic 122. Alternatively, training logic 124 can be configured to support unsupervised learning, semi-supervised learning, reinforcement learning, computer vision techniques, and/or the like.
[0024] The crop type classification logic 126 can be configured to classify or identify the crop type(s) within each crop area associated with a crop boundary based on the crop type(s) predicted by the crop type forecasting logic 122. Post-detection logic 128 can be configured to perform one or more post-classification activities for crop types such as, but not limited to, determining crop yields for different crop types, determining crop management practices/strategies, assigning a unique identifier to each crop field (or crop subfield) associated with a detected crop boundary, providing crop field (or subfield) search capabilities, and/or the like.
[0025] The crop boundary detection logic 130 can be configured to detect crop boundaries within images. In some embodiments, the crop boundary detection logic 130 can be used to generate at least a portion of the ground truth data. In addition, or alternatively, the crop boundary detection logic 130 may comprise a portion of the "trained" machine learning based model that performs crop type classification, in which the "trained" model detects crop boundaries (in order to identify the crop areas/fields/subfields) and then the crops located within those crop areas/fields/subfields are classified by their crop type(s). As with crop type forecasting logic 122, training logic 124 can be configured to facilitate supervised learning, training, and/or refinement of one or more machine learning techniques to generate/configure crop boundary detection logic 130.
[0026] In some embodiments, one or more of the logic 120-130 (or a portion thereof) can be implemented as software that comprises one or more instructions to be executed by one or more processors included in server 108. In alternative embodiments, one or more of the logic 120-130 (or a portion thereof) can be implemented as firmware or hardware such as, but not limited to, an application specific integrated circuit (ASIC), programmable array logic (PAL), a field programmable gate array (FPGA), and the like included in server 108. In other embodiments, one or more of the logic 120-130 (or a portion thereof) can be implemented as software while others of the logic 120-130 (or a portion thereof) can be implemented as firmware and/or hardware.
[0027] Although server 108 is represented as a single device in Figure 1, it is contemplated that server 108 can comprise one or more servers, and/or one or more of the logic 120-130 can be distributed across a plurality of devices. In some embodiments, depending on computing resources or limitations, one or more of the logic 120-130 can be implemented in a plurality of instances.
[0028] Database 110 may comprise one or more storage devices for storing data and/or instructions for use by server 108, device 112, server 104, and/or database 106. For example, database 110 can include images provided by server 104/database 106/device 116, ground truth data used to build and/or train crop type forecasting logic 122, crop type heat maps generated by crop type forecasting logic 122, crop type classifications generated by crop type classification logic 126, identifiers and other associated image and/or crop type information, data to be used by any of logic 120-130, data generated by any of logic 120-130, data to be accessed by user 114 via device 112, and/or data to be provided by user 114 via device 112. The contents of database 110 may be arranged in a structured format to facilitate selective retrieval. In some embodiments, database 110 may comprise more than one database. In some embodiments, database 110 may be included within server 108.
[0029] Device 112 may comprise one or more computers, workstations, personal computers, general purpose computers, portable computers, Internet devices, handheld devices, wireless devices, Internet of Things (IoT) devices, wearable devices, smart phones, tablets, and/or the like. In some embodiments, user 114 can interface with device 112 to provide data to be used by one or more of logic 120-130 (for example, manual identification of crop boundaries and crop types in selected images to serve as ground truth data) and/or to request data associated with the classified crop types (for example, searching for a specific crop field (or subfield), requesting visual display of specific images overlaid with crop type information). At least training logic 124 and post-detection logic 128 can facilitate functions associated with device 112. A user 114 providing data for use in classifying crop types may be the same as or different from a user requesting data that has been generated through performance of the crop type classification model.
[0030] Device 116 may comprise one or more satellites, airplanes, drones, hot air balloons, and/or other devices capable of capturing a plurality of aerial or elevated photographs of land surfaces. The plurality of aerial photographs can comprise a plurality of multispectral, time series images. Device 116 may include one or more location tracking mechanisms (for example, global positioning system (GPS)), multispectral imaging mechanisms (all frequency bands), weather detection mechanisms, date and time stamp generation mechanisms, mechanisms to detect the distance from the Earth's surface, and/or associated image metadata generation capabilities to provide associated image information for each image of the plurality of captured images. Device 116 can be operated manually and/or automatically, and captured images can be provided via a wired or wireless connection to server 104, server 108, or other devices. Device 116 can also be deployed over the same locations a plurality of times over a specific period of time in order to capture time series images of the same location. Examples of images (associated with ground truth data or for which automatic classification of crop types may be desired) that can be provided by, or generated from images provided by, device 116 include, without limitation, Landsat 7 satellite images, Landsat 8 satellite images, Google Earth images, and/or the like.
[0032] Figure 2 represents a flowchart illustrating an example process 200 that can be implemented by system 100 to generate a crop type classification model, perform crop type classification using the generated crop type classification model, and support various uses of the crop type classification information, according to some embodiments.
[0033] In block 202, training logic 124 can be configured to obtain or receive ground truth data comprising a plurality of land surface images with identified crop boundaries (or corresponding crop areas) and crop types classified therein. The plurality of images comprising the ground truth data can be selected to encompass those that have a variety of land features, crop boundaries, crop types, and the like in order to train/generate a detection model capable of handling the variety of land features, crop boundaries, or crop types that may be present in unknown images to be classified.
[0034] In some embodiments, the plurality of images may comprise images that contain multispectral data (for example, red green blue (RGB) spectrum, visible spectrum, near infrared (NIR), normalized difference vegetation index (NDVI), infrared (IR), all spectral bands, or the like) (also referred to as multispectral images or figures). The plurality of images can also comprise time series images, in which the same geographic location can be seen a plurality of times during a specific period of time. The specific time period may comprise, without limitation, a growing season (for example, May to October), a year, a plurality of years, the years 2008 to 2016, and/or other predetermined times. The frequency of the images can be hourly, daily, weekly, biweekly, monthly, seasonal, annual, or the like. Images associated with a specific geographic location and, optionally, a specific period of time, can be referred to as a set of images. A plurality of sets of images can be included in the ground truth data.
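Of the spectral layers named above, NDVI is derived rather than sensed directly: it is computed per pixel from the near infrared and red reflectances. A minimal sketch (the zero-denominator convention below is an implementation assumption, not part of the disclosure):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index for a single pixel.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1.0 to 1.0;
    healthy vegetation reflects strongly in NIR, so crop pixels
    tend toward high positive values.
    """
    denominator = nir + red
    if denominator == 0:
        return 0.0  # assumed convention for zero-reflectance pixels
    return (nir - red) / denominator
```

Applying this to each pixel of a red/NIR image pair yields the per-acquisition NDVI layer of the image set.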
[0035] Ground truth data can comprise, but are not limited to: (1) images with identified crop boundaries (or crop areas) - such images can be identified manually by users and/or be the results of automatic crop boundary detection (an example of which is described in connection with Figure 3); (2) images with classified/identified/specified crop types - such images can be manually identified by users and/or obtained from available government or public sources; and/or (3) images with both crop boundaries and identified crop types - such images can be manually identified by users and/or obtained from available government or public sources.
[0036] Image features that are manually identified by users can also be referred to as human-labeled data or human-labeled images. One or more users, such as user 114, can annotate selected images using a graphical user interface (GUI) mechanism provided on device 112, for example. Images with identified crop boundaries and/or crop types obtained from available government or public sources can provide such identification at a lower ground resolution or accuracy than can be provided by the crop type classification scheme of the present disclosure. For example, the ground resolution can be a resolution of 30 meters, greater than a resolution of one meter, or the like. Crop boundary and/or crop type identification from available government or public sources can also be provided as farm reports, sample-based data, survey-based data, extrapolations, and/or the like. An example of available government/public data on geo-identified crop boundaries and crop types can be the USDA CDL data for the years 2008-2016, with a (ground) resolution of 30 meters per pixel.
[0037] Training logic 124 can facilitate the selection of images, presentation of selected images for human labeling, use of labeled images, obtaining of available data on identified crop boundaries and/or crop types from government/public sources, and/or the like. Ground truth data can also be referred to as training data, model building data, model training data, and the like.
[0038] In some embodiments, the time period and/or geographic region(s) associated with the ground truth data may be the same (or approximately the same) as the time period and/or geographic region(s) associated with the images for which the crop types are to be identified (in block 216). For example, for images taken during the years 2008 to 2016 to be classified in block 216, CLU data from the year 2008 can be used, CDL data from the years 2008-2016 can be used, and human-labeled data can comprise images taken during 2008 to 2016. CLU and CDL data can comprise United States image data, and the images in the human-labeled data can also comprise United States images.
[0039] Then, in block 204, the image filtering logic 120 can be configured to perform preliminary filtering of one or more of the images comprising the ground truth data. In some embodiments, preliminary filtering may comprise screening for clouds, shadows, fog, haze, atmospheric obstructions, and/or other land surface obstructions included in the images on a per pixel basis. On a per pixel basis, if such an obstruction is detected, then the image filtering logic 120 can be configured to determine whether to address the obstruction, how to correct the obstruction, whether to omit the image information associated with the pixel of interest from building the model in block 206, and/or the like. For example, if a first pixel does not include land surface information due to a cloud, but a geographic location associated with a second pixel adjacent to the first pixel has an image because it is not obscured by a cloud, then the image filtering logic 120 can be configured to replace the value of the first pixel with the value of the second pixel. As another example, pixel values in a given image can be replaced with corresponding pixel values in another image within the same set of images (for example, from a different image in the same time series for the same geographic location). In other embodiments, block 204 may be optional if, for example, the images are known to be free from clouds and otherwise free from atmospheric obstruction.
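The second correction strategy above (borrowing pixel values from another image in the same time series) can be sketched as follows; the function name and the boolean cloud mask input are illustrative assumptions, not elements of the disclosure:

```python
def fill_obscured_pixels(image, mask, fallback_image):
    """Return a copy of `image` in which every pixel flagged in
    `mask` is replaced by the corresponding pixel of
    `fallback_image` (another acquisition of the same area).

    image:          2-D list of pixel values for one acquisition
    mask:           2-D list of booleans, True where a cloud or
                    shadow obscures the land surface
    fallback_image: 2-D list of pixel values from a different,
                    unobscured image in the same time series
    """
    filled = [row[:] for row in image]  # leave the input untouched
    for r, mask_row in enumerate(mask):
        for c, obscured in enumerate(mask_row):
            if obscured:
                filled[r][c] = fallback_image[r][c]
    return filled
```

The same routine covers the adjacent-pixel example by passing a spatially shifted copy of the image as the fallback.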
[0040] With the ground truth data obtained and optionally filtered or preliminarily corrected, the resulting ground truth data can be applied to one or more machine learning systems/techniques to generate or build a crop type model, in block 206. In some embodiments, the crop type model may comprise the crop type forecasting logic 122. The machine learning system/technique may comprise, for example, a convolutional neural network (CNN) or supervised learning. The crop type model can be configured to provide a probabilistic prediction of one or more crop type classifications for each pixel corresponding to a geographic location associated with a set of images provided as input. Crop types may comprise, but are not limited to, rice, wheat, corn, soy, sorghum, legumes, fruits, vegetables, oilseeds, nuts, pasture, and/or the like.
[0041] Since the ground truth data comprises images with precisely identified crop boundaries and crop types, the machine learning system/technique can learn which land surface features in images are indicative of crop areas and what crop type(s) are being grown within those crop areas. Such knowledge, when sufficiently detailed and accurate, can then be used to automatically identify crop types in images for which crop types may be unknown.
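As an illustrative stand-in only (not the disclosure's CNN), a nearest-centroid classifier over per-pixel spectral/temporal feature vectors shows the shape of this supervised step: labeled pixels teach the model a representative signature per crop type, and an unknown pixel is assigned the crop type with the closest signature.

```python
def train_centroids(samples):
    """samples: dict mapping crop type label -> list of equal-length
    feature vectors (e.g., a pixel's NDVI values over the season).
    Returns one mean feature vector (centroid) per crop type."""
    centroids = {}
    for label, vectors in samples.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(vector[i] for vector in vectors) / len(vectors)
            for i in range(dim)
        ]
    return centroids


def predict_crop_type(centroids, features):
    """Return the crop type whose centroid is nearest (squared
    Euclidean distance) to the pixel's feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))
```

A CNN would replace the hand-built feature vectors with learned spatial features, but the train-on-labels, predict-on-unknowns flow is the same.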
[0042] For the purpose of predicting the crop type(s) within a crop area, predicting the existence of the crop area may be involved, so that at least the portions of the image to be analyzed to make crop type predictions can be reduced or minimized. Consequently, in some embodiments, the crop boundary detection logic 130, together with the crop type forecasting logic 122, can be considered part of the crop type model. The crop boundary detection logic 130 is discussed in connection with Figure 3. The crop type model can also be referred to as a crop type classification model.
[0043] In some embodiments, the crop type model may be associated with a specific geographic region, the same geographic region captured in the images comprising the ground truth data. For example, the crop type model may be specific to a particular county within the United States. Similarly, the crop type model may also be associated with a specific period of time, the same period of time associated with the images comprising the ground truth data. As the geographic region gets larger, data inconsistencies or regional differences may arise, which can result in a less accurate crop type model.
[0044] In block 208, training logic 124 can then be configured to determine whether the accuracy of the crop type model is equal to or exceeds a predetermined threshold. The predetermined threshold can be 70%, 80%, 85%, 90%, or the like. If the accuracy of the model is less than the predetermined threshold ("no" branch of block 208), then process 200 can return to block 202 to obtain/receive additional ground truth data to apply to the machine learning systems/techniques to refine the current crop type model. Providing additional ground truth data to the machine learning systems/techniques comprises providing additional supervised learning data so that the crop type model can be better configured to predict what crop type(s) is/are grown or was/were grown in a crop area. One or more iterations of blocks 202-208 can occur until a sufficiently accurate crop type model can be constructed.
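The iterate-until-accurate loop of blocks 202-208 reduces to simple control flow; in this sketch all four inputs (model builder, data source, evaluator, threshold) are illustrative placeholders rather than elements of the disclosure:

```python
def refine_until_accurate(build_model, get_more_data, evaluate,
                          threshold=0.85, max_rounds=10):
    """Blocks 202-208 as a loop: rebuild the crop type model with
    additional ground truth data until its evaluated accuracy meets
    the predetermined threshold (or a round limit is hit)."""
    data = get_more_data()      # block 202: initial ground truth data
    model = build_model(data)   # block 206: build the model
    rounds = 1
    while evaluate(model) < threshold and rounds < max_rounds:  # block 208
        data = data + get_more_data()   # back to block 202 for more data
        model = build_model(data)
        rounds += 1
    return model
```

The `max_rounds` guard is an added safety assumption so the sketch terminates even if accuracy never reaches the threshold.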
[0045] If the accuracy of the model is equal to or exceeds the predetermined threshold ("yes" branch of block 208), then the crop type model can be considered acceptable for use in automatic or unsupervised classification of crop types for images in which the crop types (and crop boundaries) are unknown. In block 210, a plurality of images can be obtained or received to be applied to the crop type model for automatic classification. The plurality of images can be those captured by device 116.
[0046] In some embodiments, the plurality of images may comprise a plurality of sets of images, in which each set of images of the plurality of sets of images may be associated with a respective portion/area (for example, a United States county) of a plurality of portions/areas (for example, all counties in the United States) that collectively comprise a geographic region (for example, the United States) for which the crop types of all crop fields/subfields located therein may be classified. For each portion/area of the plurality of portions/areas, the associated set of images may comprise: (1) at least one image for each of a plurality of time points (for example, May 1, June 1, July 1, August 1, September 1, and October 1); and (2) for a respective time point of the plurality of time points, there may also be one or more images, in which each image can provide specific/different spectral information from another image taken at the same time point (for example, a first image taken on May 1 comprises an RGB image, a second image taken on May 1 comprises a NIR image, a third image taken on May 1 comprises an NDVI image, etc.).
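One convenient in-memory arrangement for such a set of images is a two-level mapping, date then band; a completeness check over it might look like the following (the specific dates, band names, and function are assumptions for illustration, not a schema fixed by the disclosure):

```python
# Time points and spectral bands mirror the examples in the text.
REQUIRED_DATES = ["05-01", "06-01", "07-01", "08-01", "09-01", "10-01"]
REQUIRED_BANDS = ["RGB", "NIR", "NDVI"]


def image_set_complete(image_set):
    """True if this area's image set holds every band at every time
    point; `image_set` maps date -> {band name -> image reference}."""
    return all(
        band in image_set.get(date, {})
        for date in REQUIRED_DATES
        for band in REQUIRED_BANDS
    )
```

Such a check could gate a set of images before it is handed to the crop type model as input.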
[0047] The total geographic region covered by the plurality of images can be the same (or approximately the same) geographic region associated with the images used in block 202 to generate the crop type model. In other words, the crop type model generated in block 206 may have been developed specifically for use with the images in block 210. Such a crop type model can also be referred to as a local or localized crop type model. The plurality of images obtained in block 210 can also be associated with the same time period as the time period of the crop type model. Continuing the example above, the crop type model generated in block 206 may be associated with the United States and the years 2008-2016 (since the images used to train and build the model were images of the United States taken during the years 2008-2016), and the plurality of images in block 210 may similarly be images of the United States taken during the years 2008-2016.
[0048] Each image within a set of images can represent the same terrestrial location (in the same orientation and at the same distance from the surface), except that the images differ in multispectral and/or time series content. Therefore, each image within the image set can be the "same" image, except that the terrestrial surface features may differ across different times and/or different spectrum/color composition schemes. In some embodiments, images within the image sets that comprise the true field data in block 202 may have similar characteristics.
[0049] The images in block 210 can then be preliminarily filtered by the image filtering logic 120, in block 212. In some embodiments, block 212 may be similar to block 204, except that the images processed are those of block 210 instead of those of block 202. In other embodiments, if the images were taken (or retaken, if necessary) to ensure that clouds and other obstructions are not present in the images, then block 212 may be optional.
[0050] Next, in block 214, the crop type prediction logic 122 (with assistance from the crop boundary detection logic 130, in some embodiments) can be configured to determine a crop type heat map for each set of (filtered) images of the plurality of sets of images obtained in block 210. For each set of images of the plurality of sets of images, the set of images can be provided as input to the crop type model generated in block 206, and in response, the crop type model can provide a prediction/determination of the crop type(s) within each crop area on a per-pixel or per-crop-area basis. Each pixel of the heat map can indicate the relative or absolute probability of the specific crop type(s). In some embodiments, the heat map can be vectorized into a grid format.
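As a hedged illustration (not the patent's actual model), the per-pixel heat map step can be sketched as applying a trained scoring function to a stack of multispectral, time-series bands and recording a class-probability vector per pixel; the linear-plus-softmax model below is a hypothetical stand-in for the crop type model:

```python
import numpy as np

def crop_type_heat_map(image_stack, weights, bias):
    """Sketch of block 214: produce per-pixel crop-type probabilities.

    image_stack: (H, W, B) array of B multispectral/time-series bands.
    weights: (B, C) hypothetical trained weights for C crop types.
    bias: (C,) hypothetical trained bias.
    Returns an (H, W, C) heat map whose last axis sums to 1 per pixel.
    """
    logits = image_stack @ weights + bias          # linear scores per pixel
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=-1, keepdims=True)   # softmax -> probabilities

# Example: a 2x2 "image" with 3 bands scored against 4 crop types.
rng = np.random.default_rng(0)
heat = crop_type_heat_map(rng.random((2, 2, 3)), rng.random((3, 4)), np.zeros(4))
```

The (H, W, C) output can then be vectorized into a grid, or argmax-reduced per pixel, matching the per-pixel probability description above.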
[0051] A single crop area may have one or more predicted crop types. If the crop type heat map is presented visually, each crop type of a plurality of crop types can be assigned a color different from the others, and the intensity/shade of a specific color superimposed on the image can indicate the statistical probability or accuracy of the crop type prediction, for example. As another example, the predicted crop type and/or its probability/accuracy can be expressed as text in the image.
[0052] Multispectral and time series images that comprise a set of images for the same geographic area may allow the detection of changes in specific terrestrial surface features over time, which facilitates determining whether a specific area is likely to be a crop area and which crop(s) are likely to be cultivated within the crop area. For example, crop colors can change over the course of a growing season. Crop fields before planting, during the growing season, and after harvest may look different from one another. Specific patterns of crop color change may indicate the type of crop being grown (for example, wheat, soy, corn, etc.). The times at which a crop is planted and/or harvested can also indicate the type of crop being grown. If a first crop type is grown in a given crop field in a first year and a second crop type different from the first is grown in the same crop field in a second year, the changes detected between the two years may indicate that the geographical location associated with that crop field is likely a crop area. Different crop types may also have different planting pattern characteristics (for example, the distance between adjacent planted rows may differ for different crop types).
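For instance, one common vegetation signal derivable from multispectral bands is NDVI, whose seasonal trajectory differs between crop types; a minimal sketch follows (the monthly band readings are invented for illustration and are not from the patent):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red).

    High NDVI indicates dense green vegetation; its rise and fall over a
    growing season helps distinguish planting, peak growth, and harvest.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

# Hypothetical monthly NIR/Red readings for one pixel (May..October).
monthly_nir = np.array([0.30, 0.45, 0.60, 0.65, 0.50, 0.35])
monthly_red = np.array([0.25, 0.20, 0.12, 0.10, 0.18, 0.28])
series = ndvi(monthly_nir, monthly_red)  # rises toward mid-season, falls at harvest
```

A time series of such values per pixel is one example of the temporal change signal described above that a crop type model could exploit.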
[0053] Then, in block 216, the crop type classification logic 126 can be configured to classify crop types for the crop areas based on the crop type heat map, for each set of images of the plurality of sets of images from block 210. In some embodiments, if more than one crop type is predicted for a given crop area, a majority voting rule can be applied in which the most probable crop type among the predicted crop types is selected as the crop type for the given crop area. If there is no predicted dominant majority crop type (for example, no crop type predicted with 70% or greater probability), then the given crop area can be divided into a plurality of crop sub-areas, with each of the crop sub-areas assigned a respective crop type from the plurality of crop types predicted for the given crop area. For example, if a given crop area has a first crop type predicted with 30% probability, a second crop type predicted with 40% probability, and a third crop type predicted with 30% probability, then the margin of error in the probabilities may be such that no dominant crop type prediction exists. In this case, the given crop area can be subdivided into first, second, and third sub-areas assigned the first, second, and third crop types, respectively.
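A minimal sketch of this voting rule (the 70% dominance threshold is the example figure from the text; the function name and return shape are illustrative assumptions):

```python
def classify_crop_area(type_probs, dominance_threshold=0.70):
    """Apply the majority-voting rule of block 216 to one crop area.

    type_probs: dict mapping crop-type name -> predicted probability.
    Returns ("dominant", crop_type) when one type meets the threshold,
    otherwise ("subdivide", [crop types to assign to sub-areas]).
    """
    best_type = max(type_probs, key=type_probs.get)
    if type_probs[best_type] >= dominance_threshold:
        return ("dominant", best_type)
    # No dominant prediction: split the area into one sub-area per type.
    return ("subdivide", sorted(type_probs, key=type_probs.get, reverse=True))

# The 30% / 40% / 30% example from the text: no type dominates.
decision = classify_crop_area({"corn": 0.30, "soy": 0.40, "wheat": 0.30})
# A clear-cut case: one type at 85% is selected outright.
clear = classify_crop_area({"corn": 0.85, "soy": 0.15})
```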
[0054] In alternative embodiments, supplementary knowledge can be used together with the crop type heat map to make a final classification of crop types for the crop areas. For example, if certain crop types cannot or do not grow in the same geographical location at the same time, then, if such incompatible crop types are predicted for the same crop area, the one or more of those crop types that are less likely to be grown in that geographical location can be ignored.
[0055] Crop boundaries associated with each crop area/field/subfield can be determined or identified at sub-meter (ground) resolution, a resolution of approximately 0.15 to 0.2 meter, a resolution of less than 0.5 meter, a resolution of less than approximately 0.2 meter, and the like. By extension, the crop type classification for each crop area/field/subfield can also be considered to be made at sub-meter resolution, a resolution of approximately 0.15 to 0.2 meter, a resolution of less than 0.5 meter, a resolution of less than approximately 0.2 meter, and the like.
[0056] In some embodiments, at least some of the images in a set of images associated with a specific portion of the overall geographic region of interest (for example, images obtained in block 210) may have resolutions different from one another and/or a resolution lower than the resolution associated with the crop type classification output by the crop type classification logic 126. For example, the outputs comprising crop types can be classified with a ground resolution of less than one meter (less than one meter per pixel) or 0.1 meter (0.1 meter per pixel) even though at least some of the images provided as inputs have a ground resolution of 5 meters.
[0057] Crop boundaries can define closed-form areas. Crop boundaries may comprise crop field boundaries or, given sufficient information in the set of images and/or prior knowledge, crop subfield boundaries. Crop field boundaries can define a crop field, which can comprise a physical area delineated by fences, permanent waterways, woods, roads, and the like. A crop subfield may comprise a subset of a crop field, in which one portion of the physical area of the crop field predominantly contains a specific crop type that is different from a crop type prevalent in another portion of the physical area of the crop field. Each of the portions of different crop types in the physical area can be considered to be a crop subfield. Therefore, a crop field can contain one or more crop subfields. For example, a crop field may include a first corn subfield and a second soybean subfield.
[0058] In some embodiments, the crop type heat map provided by the crop type prediction logic 122 can indicate the probability of the crop type(s) for each crop area, while the crop type classification logic 126 can be configured to make a final determination as to which pixels associated with a crop area should be assigned to which crop type among the crop type(s) predicted for the crop area.
[0059] With crop types classified down to the crop subfield level for all sets of images, process 200 can proceed to block 218, in which the post-detection logic 128 can be configured to perform one or more post-detection activities on the classified crop fields/subfields for all image sets (for example, for the overall geographic region). For each crop field/subfield with a classified crop type, post-detection activities may include, without limitation: calculating the area of the crop field/subfield; assigning a unique identifier to the crop field/subfield (for example, a computer-generated globally unique identifier (GUID) that will never be reused for another crop field/subfield); classifying the crop field/subfield within a classification system (for example, the crop field/subfield can be classified, designated, labeled, or associated with a specific continent, country, state, county, and the like); and/or generating associated metadata for use in storage, retrieval, search, and/or update activities. In some embodiments, post-detection activities may also include overlaying indications of the identified crop fields/subfields and crop types on the original images in order to visually present the crop type classification results, and otherwise visually augmenting the original images with the detected information. Data resulting from post-detection activities can be kept in database 110.
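Two of these post-detection activities, area calculation from a classified pixel mask and GUID assignment, can be sketched as follows (the ground resolution default and the record layout are illustrative assumptions, not the patent's schema):

```python
import uuid

def field_area_m2(pixel_count, ground_resolution_m):
    """Area of a field/subfield given its pixel count in the classified mask
    and the per-pixel ground resolution (meters per pixel side)."""
    return pixel_count * ground_resolution_m ** 2

def make_field_record(pixel_count, crop_type, ground_resolution_m=0.15):
    """Build a post-detection record: a never-reused GUID, the classified
    crop type, and the computed area in square meters."""
    return {
        "id": str(uuid.uuid4()),        # globally unique, never reused
        "crop_type": crop_type,
        "area_m2": field_area_m2(pixel_count, ground_resolution_m),
    }

# A hypothetical corn field covering 2 million pixels at 0.15 m/pixel.
record = make_field_record(pixel_count=2_000_000, crop_type="corn")
```

Records of this shape could then be stored in a database such as database 110 for the search and update activities described later.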
[0060] In some embodiments, for each set of images, the post-detection logic 128 can be configured to generate a new image (also referred to as a crop-indicative image) that represents the original image (for example, at least one image of the plurality of images that comprise the set of images) overlaid with indicators of the determined crop type(s). The image 340 shown in Figure 3A is an example of the new image.
[0061] Figure 3A represents several images that can be used or generated during the crop type classification of the present disclosure, according to some embodiments. Image 300 comprises an example of low-resolution true field data. Image 300 may have a resolution of 30 meters/pixel and may comprise, for example, a USDA CDL data image and/or the like. Image 300 may include indications 302 and 304 indicative of crop area locations and, optionally, the crop type(s) for the crop areas. Since image 300 comprises a low-resolution image, the locations of the crop areas and the crop type classifications for specific locations can, at best, be approximate.
[0062] Image 310 may comprise an example of an image from a set of images of the plurality of sets of images (for example, an image obtained/received in block 210). Image 310 may comprise a high-resolution image with a resolution of, for example, 0.15 meter/pixel, acquired on an annual basis and/or the like. In some embodiments, images 300 and 310 can be associated with the same geographic location. The images 330 may also comprise examples of images from the image set of the plurality of image sets. Images 310 and 330 can comprise images from the same set of images. Images 330 may comprise examples of low-resolution, time-series images acquired on a monthly basis and/or the like.
[0063] As described above, in the course of carrying out crop type classification, crop boundaries can be identified. Image 320 represents a visual illustration of the crop boundaries that can be identified in image 310. Image 320 can comprise image 310 overlaid with indications of the identified crop boundaries 322, 324, 326, and 328. Image 320 can comprise a high-resolution image with a resolution of, for example, 0.15 meter/pixel.
[0064] With the crop boundaries identified for the image set, the images in the image set can additionally be used to determine the crop type(s) for each of the identified crop boundaries. Image 340 can comprise image 310 or image 320 with indications of the crop type classifications for the respective crop boundaries included. The crop types for crop boundaries 322, 324, 326, and 328 are "grapes", "corn", "soy", and "grapes", respectively.
[0065] If visualization, search, or other activities involving specific crop fields/subfields or crop types are performed, such a newly generated image can be displayed to the user.
[0066] Next, in block 220, the post-detection logic 128 can be configured to determine whether the crop type classification is to be updated. An update can be triggered based on the availability of new images (for example, in near real time for potential changes to one or more crop boundaries, a new growing season, etc.), a time/date event (for example, a new year, a new growing season), sufficient time since the last update, some predefined period of time (for example, periodically, weekly, biweekly, monthly, seasonally, annually, etc.), and/or the like. If an update is to be performed ("yes" branch of block 220), then process 200 can return to block 210. If no update is to be performed ("no" branch of block 220), then process 200 can proceed to blocks 222, 224, and 226.
[0067] In block 222, the post-detection logic 128 can be configured to provide visualization and data search functions for crop types. Application programming interfaces (APIs), websites, applications, and/or the like can be implemented for users to access various types of crop data. For example, users can search for all crop areas classified as a specific crop type, crop areas within a specific country classified as a specific crop type, the size of crop areas by crop type, crop yield for different crop types, crop management practices for different crop types, or any other search parameters. Overlaid images with indications of crop boundaries and crop types can be displayed to users. Users can perform searches and view crop type data using the device 112, for example.
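A toy illustration of the kind of query block 222 envisions, filtering classified records by crop type and region (the record fields and sample data are hypothetical):

```python
def search_crop_areas(records, crop_type=None, country=None):
    """Return records matching the given crop type and/or country.

    records: iterable of dicts with hypothetical keys
    "crop_type", "country", and "area_m2".
    """
    out = []
    for r in records:
        if crop_type is not None and r["crop_type"] != crop_type:
            continue
        if country is not None and r["country"] != country:
            continue
        out.append(r)
    return out

# Hypothetical classified-field records.
db = [
    {"crop_type": "corn", "country": "US", "area_m2": 50_000.0},
    {"crop_type": "soy", "country": "US", "area_m2": 30_000.0},
    {"crop_type": "corn", "country": "BR", "area_m2": 20_000.0},
]
us_corn = search_crop_areas(db, crop_type="corn", country="US")
```

In practice such filtering would be expressed as database queries behind an API rather than in-memory loops; the sketch only shows the search parameters the text describes.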
[0068] In block 224, the post-detection logic 128 can be configured to facilitate acceptance of changes, by authorized users, to the crop type classification of specific fields/subfields that were identified automatically. A farmer who planted the crops in a specific crop field/subfield can report that the crop type in the database for that crop field/subfield is incorrect or incomplete and can manually label images with the correct crop type(s). The modifications provided, which can be submitted for approval, can then be used to update the database 110. The modifications provided can also be used as true field data to refine the crop type model.
[0069] The determined crop type classifications can be extended to a variety of uses. In block 226, the post-detection logic 128 can be configured to perform one or more of the following based on the crop type classifications and/or crop characteristics detected during the course of crop type classification: estimating crop yield by crop type (for example, by crop type, by county, by crop type and county, by crop type and country, etc.); determining crop management practices by crop type (for example, estimating the harvest date, determining when to apply fertilizer, determining the type of fertilizer to apply); diagnosing crop diseases; controlling or curing crop diseases; identifying different cultivars within the crop types; determining crop attributes (for example, based on the direction of planted crops); and the like.
[0070] Figure 3B represents an exemplary presentation of crop yield estimates calculated from crop type classification data, according to some embodiments. Image 350 can comprise the same image as image 340, supplemented with crop yield estimates for each crop field/subfield. As shown, corn yield is higher than grape or soybean yield on a per-acre basis. If similar estimates are calculated for each of the different crop types for all crop fields/subfields within a geographic region (for example, the United States), then the aggregate crop production for each crop type can be known.
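The aggregation described above, per-field yield estimates rolled up to per-crop-type totals for a region, can be sketched as follows (the yield figures and units are invented for illustration):

```python
def aggregate_production(fields):
    """Sum estimated production per crop type across classified fields.

    fields: iterable of (crop_type, acres, yield_per_acre) tuples with
    hypothetical units (e.g. bushels per acre).
    Returns {crop_type: total_production}.
    """
    totals = {}
    for crop_type, acres, yield_per_acre in fields:
        totals[crop_type] = totals.get(crop_type, 0.0) + acres * yield_per_acre
    return totals

fields = [
    ("corn", 100.0, 180.0),   # corn yields more per acre...
    ("soy", 100.0, 50.0),     # ...than soy, as in Figure 3B's example
    ("corn", 50.0, 170.0),
]
totals = aggregate_production(fields)
```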
[0071] In this way, a complete database of crop fields/subfields (or crop boundaries) with classified crop types for a given geographic region (for example, county, state, country, continent, planet) can be generated automatically, which is granular to sub-meter resolution and can be kept up to date over time with minimal supervision. For a plurality of geographic regions, provided that true field data exist for the respective geographic regions of the plurality of geographic regions, process 200 can be performed for each of the plurality of geographic regions.
[0072] Figure 4 represents a flowchart illustrating an exemplary process 400 that can be implemented by system 100 to automatically detect crop boundaries (and, correspondingly, crop areas/fields/subfields) in images, according to some embodiments. The crop boundaries detected in block 416 of Figure 4 can comprise the crop boundary detection results mentioned above for the true field data in block 202 of Figure 2. In some embodiments, the crop boundary detection performed by the crop type model in the course of generating the crop type heat map comprises at least blocks 414 and 416 of Figure 4.
[0073] In block 402, the training logic 124 can be configured to obtain or receive true field data that comprise a plurality of terrestrial surface images with identified crop boundaries. The plurality of images that comprise the true field data can be selected to encompass images having a variety of terrestrial features, crop boundaries, and the like in order to train/generate a detection model capable of handling the different terrestrial features and crop boundaries that may be present in images undergoing detection. In some embodiments, the plurality of images may be similar to those discussed above for block 202 of Figure 2, except that crop boundaries are identified instead of classified crop types.
[0074] In some embodiments, true field data for crop boundary detection may comprise existing images with identified crop boundaries (or crop areas), in which the crop boundaries (or crop areas) can be identified at a low (ground) resolution (for example, a resolution greater than one meter, a resolution of 3 to 250 meters, a resolution of 30 meters, etc.). Such images can be high frequency, such as a daily or biweekly update rate. Since the crop boundary identification is low resolution, such identification can be considered "noisy", approximate, or imprecise. Examples of existing images with identified low-resolution crop boundaries may include, without limitation, USDA CDL data, FSA CLU data, collected government data, sample-based data or surveys, farmer reports, and/or the like. Existing images with identified crop boundaries can be obtained by server 104, stored in database 106, and/or provided to server 108.
[0075] In some embodiments, true field data may comprise CDL and CLU data (as discussed above) and/or human-labeled data. Human-labeled data can comprise crop boundaries in images that are manually identified, labeled, or annotated by, for example, user 114 using a graphical user interface (GUI) mechanism provided on device 112. Such manual annotation may be at a (ground) resolution higher than that associated with the CDL and/or CLU data. Images that are manually labeled can be obtained from device 116, for example. The training logic 124 can facilitate the selection of images, presentation of selected images, use of human-labeled images, and/or the like. True field data can also be referred to as training data, model building data, model training data, and the like.
[0076] In some embodiments, the time period and/or geographic region(s) associated with the true field data may be the same (or approximately the same) as the time period and/or geographic region(s) associated with the images for which the crop boundaries are to be detected (in block 216). For example, for images taken during the years 2008-2016 to be processed in block 216, CLU data for the year 2008 can be used, CDL data for the years 2008-2016 can be used, and human-labeled data can comprise images taken during 2008 to 2016.
[0077] Then, in block 404, the image filtering logic 120 can be configured to perform preliminary filtering of one or more images that comprise the true field data. In some embodiments, preliminary filtering may comprise screening for clouds, shadows, fog, haze, atmospheric obstructions, and/or other terrestrial surface obstructions included in the images on a per-pixel basis. Block 404 may be similar to block 204, except that the filtered images are the images that comprise the true field data of block 402.
[0078] With the true field data obtained and, optionally, preliminarily filtered or corrected, the resulting true field data can be applied to one or more machine learning systems/techniques to generate or build a crop/non-crop model, in block 406. The machine learning system/technique can comprise, for example, a convolutional neural network (CNN) or a supervised learning system. The crop/non-crop model can be configured to provide a probabilistic crop or non-crop prediction for each pixel corresponding to a specific geographic location associated with a set of images provided as input. The crop/non-crop model can comprise a portion of the crop boundary detection logic 130. Since the true field data comprise images with accurately identified crop boundaries, the machine learning system/technique can learn which terrestrial surface features in the images are indicative of crop or non-crop. Such knowledge, when sufficiently detailed and accurate, can then be used to automatically identify crop boundaries in images for which the crop boundaries may be unknown.
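As a hedged, much-simplified stand-in for the CNN described above (not the patent's actual network), per-pixel crop/non-crop probability can be illustrated with a single 3x3 convolution followed by a logistic activation; the kernel and bias stand in for learned parameters:

```python
import numpy as np

def crop_probability_map(image, kernel, bias):
    """Toy crop/non-crop model: one 3x3 convolution + sigmoid per pixel.

    image: (H, W) single-band array; kernel: (3, 3) hypothetical learned
    weights. Returns an (H-2, W-2) map of crop probabilities in (0, 1)
    for interior pixels (no padding, for brevity).
    """
    h, w = image.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel) + bias
    return 1.0 / (1.0 + np.exp(-out))  # sigmoid -> probability of "crop"

# Hypothetical weights that respond to bright (vegetated) neighborhoods.
kernel = np.full((3, 3), 1.0)
image = np.zeros((5, 5))
image[:, 3:] = 1.0            # right side "vegetated", left side bare
probs = crop_probability_map(image, kernel, bias=-4.5)
```

A real CNN would stack many such convolutions with learned filters over all multispectral bands; the sketch only shows the per-pixel probabilistic output format the text describes.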
[0079] In some embodiments, the crop/non-crop model may be associated with a specific geographic region, the same geographic region captured in the images that comprise the true field data. For example, the crop/non-crop model may be specific to a particular county within the United States. Similarly, the crop/non-crop model can also be associated with a specific time period, the same time period associated with the images that comprise the true field data. As the geographic region gets larger, data inconsistencies or regional differences can occur, which can result in a less accurate crop/non-crop model.
[0080] In block 408, the training logic 124 can be configured to determine whether the accuracy of the crop/non-crop model meets or exceeds a predetermined threshold. The predetermined threshold can be 70%, 80%, 85%, 90%, or the like. If the accuracy of the model is less than the predetermined threshold ("no" branch of block 408), then process 400 can return to block 402 to obtain/receive additional true field data to apply to the machine learning systems/techniques to refine the current crop/non-crop model. Providing additional true field data to the machine learning systems/techniques comprises providing additional supervised learning data so that the crop/non-crop model can be better configured to predict whether a pixel represents a crop (or is located within a crop field) or a non-crop (or is not located within a crop field). One or more iterations of blocks 402-408 can occur until a sufficiently accurate crop/non-crop model is constructed.
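The accept-or-retrain loop of blocks 402-408 can be sketched generically as follows (the trainer/evaluator callables, the toy accuracy curve, and the 85% threshold are illustrative assumptions):

```python
def train_until_accurate(train_model, evaluate, get_more_data,
                         threshold=0.85, max_rounds=10):
    """Iterate blocks 402-408: train, check accuracy against a threshold,
    and fetch more true field data until the model is acceptable.

    train_model(data) -> model; evaluate(model) -> accuracy in [0, 1];
    get_more_data(data) -> augmented data. All three are caller-supplied.
    """
    data = get_more_data([])              # initial true field data (block 402)
    for _ in range(max_rounds):
        model = train_model(data)         # block 406
        accuracy = evaluate(model)        # block 408
        if accuracy >= threshold:         # "yes" branch: model accepted
            return model, accuracy
        data = get_more_data(data)        # "no" branch: more supervised data
    return model, accuracy

# Toy stand-ins: accuracy grows with the amount of training data.
model, acc = train_until_accurate(
    train_model=lambda d: {"n": len(d)},
    evaluate=lambda m: min(1.0, 0.5 + 0.1 * m["n"]),
    get_more_data=lambda d: d + ["batch"],
)
```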
[0081] If the accuracy of the model meets or exceeds the predetermined threshold ("yes" branch of block 408), then the crop/non-crop model can be considered acceptable for use in unsupervised or automatic crop/non-crop detection on images for which crop boundaries (or crop fields) are unknown. In block 410, a plurality of images to be applied to the crop/non-crop model for automatic detection can be obtained or received. A plurality of image sets can be obtained, in which each image set of the plurality of image sets is associated with the same (or almost the same) geographic location and time period as the images in block 402. If the crop boundary detection results are used as true field data in block 202, then the images obtained in block 410, the images in block 402, and the images in block 202 can all be associated with the same (or almost the same) geographic location and time period. The plurality of images can be those captured by device 116, Landsat 7 images, Landsat 8 images, Google Earth images, images of one or more different resolutions, and/or images acquired at one or more different frequencies.
[0082] The images in block 410 can then be preliminarily filtered by the image filtering logic 120, in block 412. In some embodiments, block 412 may be similar to block 404, except that the images processed are those of block 410 instead of those of block 402. In other embodiments, if the images are taken (or retaken, as needed) to ensure that clouds and other obstructions are not present in the images, then block 412 may be optional.
[0083] Next, in block 414, the crop boundary detection logic 130 can be configured to determine a crop/non-crop heat map for each (filtered) image set of the plurality of image sets obtained in block 410. For each set of images of the plurality of sets of images, the set of images can be provided as input to the crop/non-crop model generated in block 406, and in response, the crop/non-crop model can provide a prediction/determination of whether a crop is represented, on a per-pixel basis; in other words, a prediction of the presence of a crop (or non-crop) at specific locations within the specific portion of the geographic region associated with a respective set of images. Each pixel on the heat map can indicate the relative or absolute probability of a crop or non-crop. In some embodiments, the probabilistic crop/non-crop predictions provided by the heat map can be indicated by the use of colors, patterns, shadows, tones, or other specific indicators overlaid on the original image. For example, a zero probability of a crop can be indicated by the absence of an indicator, the highest probability of a crop can be indicated by the darkest or brightest shade of red, and intermediate probabilities can be appropriately graded in color, shade, tone, pattern, or the like between no indication and the darkest/brightest red color.
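The shading scheme described above, no indicator at zero probability grading up to the brightest red, can be sketched as a simple RGBA mapping (the linear ramp is one illustrative choice among the gradings the text allows):

```python
def probability_to_rgba(p):
    """Map a crop probability p in [0, 1] to an RGBA overlay color.

    p == 0 -> fully transparent (no indicator); p == 1 -> brightest,
    fully opaque red; intermediate p -> linearly graded red intensity.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    alpha = int(round(255 * p))          # opacity tracks confidence
    red = int(round(255 * p))            # brighter red = more likely crop
    return (red, 0, 0, alpha)

# Example gradient for a row of heat-map pixels.
row = [probability_to_rgba(p) for p in (0.0, 0.5, 1.0)]
```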
[0084] In block 416, the crop boundary detection logic 130 can be configured to determine crop boundaries based on the crop/non-crop heat map, for each set of images of the plurality of image sets of block 410. In addition to using the crop/non-crop heat map, the determination of boundary and crop locations may also be in accordance with prior knowledge, the application of de-noising techniques, the application of clustering and region-growing techniques, and/or the like.
[0085] In some embodiments, the crop boundary detection logic 130 can be configured to use prior knowledge information in determining crop boundaries. Prior knowledge information may include, without limitation: known locations of roads, waterways, woods, buildings, parking lots, fences, walls, and other physical structures; known information about farming practices, such as the shapes of specific boundaries that arise from specific farming practices near the geographic location associated with the set of images (for example, straight boundaries, or circular boundaries in the case of known use of pivot irrigation); crop types; and/or the like. De-noising or filtering techniques can be implemented to determine crop boundaries and/or to refine crop boundaries. Applicable de-noising or filtering techniques may include, without limitation, techniques for preliminarily smoothing certain crop boundaries (for example, since, in the absence of physical barriers, boundaries tend to be linear or to follow a geometric shape). Similarly, clustering and region-growing techniques can be used to determine or refine crop boundaries. Unsupervised clustering and region-growing techniques can be used to reclassify scattered pixels from non-crop to crop, or vice versa, in areas where a few pixels deviate from a significantly larger number of pixels surrounding them. For example, if a few pixels are classified as non-crop within a larger area that is classified as crop, then those few pixels can be reclassified as crop.
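A minimal sketch of this scattered-pixel cleanup, a 3x3 majority filter over a binary crop mask, is one simple instance of the unsupervised reclassification described above (the neighborhood size and threshold are illustrative choices):

```python
import numpy as np

def majority_filter(mask):
    """Reclassify isolated pixels: each interior pixel takes the majority
    crop (1) / non-crop (0) label of its 3x3 neighborhood.

    mask: 2-D array of 0/1 labels; border pixels are left unchanged.
    """
    out = mask.copy()
    h, w = mask.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighborhood = mask[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = 1 if neighborhood.sum() >= 5 else 0  # majority of 9
    return out

# A lone non-crop pixel inside a crop region is flipped back to crop.
mask = np.ones((5, 5), dtype=int)
mask[2, 2] = 0
cleaned = majority_filter(mask)
```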
[0086] Crop boundaries can be determined or identified at sub-meter (ground) resolution, a resolution of approximately 0.15 to 0.2 meter, a resolution of less than 0.5 meter, a resolution of less than approximately 0.2 meter, and the like. Crop boundaries can define closed-form areas. Crop boundaries may comprise crop field boundaries or, given sufficient information in the set of images and/or prior knowledge, crop subfield boundaries. Crop field boundaries can define a crop field, which can comprise a physical area delineated by fences, permanent waterways, woods, roads, and the like. A crop subfield may comprise a subset of a crop field, in which one portion of the physical area of the crop field predominantly contains a specific crop type that is different from a crop type prevalent in another portion of the physical area of the crop field. Each of the portions of different crop types in the physical area can be considered to be a crop subfield. Therefore, a crop field can contain one or more crop subfields. For example, a crop field may include a first corn subfield and a second soybean subfield.
[0087] In some embodiments, the crop/non-crop heat map can indicate the probability of crop areas, while the crop boundary detection logic 130 can be configured to make a final determination of which of the pixels indicated as likely to represent crops on the crop/non-crop heat map comprise crop field(s) or crop subfield(s). The perimeter of a crop field or crop subfield defines the associated crop field or crop subfield boundary.
[0088] In this way, crop boundaries down to the crop subfield level can be detected automatically. Such crop boundary detection results can be used as true field data in block 202. The crop boundary detection technique (or portions thereof) discussed in this document can be included in the crop type model generated in block 206, in some embodiments.
[0089] Figure 5 represents a flowchart illustrating an example process 500 that can be implemented by system 100 to perform crop type classification using an existing crop type classification model and to modify the crop type classification model on an as-needed basis, according to some embodiments. In some embodiments, blocks 502, 504, 506, and 508 may be similar to blocks 210, 212, 214, and 216 of Figure 2, except that the sets of images for which the crop type classification is performed may be associated with a geographic region and/or time period that differs from the geographic region and/or time period associated with the crop type model used in block 506.
[0091] In some embodiments, blocks 510-512 can be executed simultaneously with, before, or after blocks 502-508. Blocks 510 and 512 can be similar to the respective blocks 202 and 204 of Figure 2. The true field data obtained in block 510 can be associated with the same (or approximately the same) geographic region and/or time period as the sets of images in block 502. In some embodiments, the amount of true field data in block 510 may differ from the amount of true field data in block 202. A smaller amount of true field data may be available because there may be little or no government/public crop data available for countries outside the United States or for previous years.
[0092] In block 514, training logic 124 can be configured to assess the accuracy of at least a subset of the crop types predicted using the existing crop type model in block 508 by comparison with the crop types identified in the (filtered) ground truth data provided in blocks 510, 512. In some embodiments, the crop type(s) classified for the same (or nearly the same) geographic areas in the two sets of identified crop type data can be compared to each other.
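A minimal sketch of the block 514 comparison (illustrative only; the location keying and crop type labels are hypothetical) would score the fraction of overlapping locations where the model's prediction matches the ground truth:

```python
def crop_type_accuracy(predicted, ground_truth):
    """Fraction of locations where the predicted crop type matches
    the crop type identified in the ground truth data.

    predicted / ground_truth: dicts mapping a location id to a crop type.
    Only locations present in both data sets (the same, or nearly the
    same, geographic areas) are compared.
    """
    common = set(predicted) & set(ground_truth)
    if not common:
        return 0.0
    matches = sum(predicted[loc] == ground_truth[loc] for loc in common)
    return matches / len(common)
```

The resulting accuracy would then be compared against the threshold of block 514 described below.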
[0093] If the accuracy of the predicted crop types equals or exceeds a threshold (“yes” branch of block 514), then process 500 can proceed to blocks 516-522. The threshold may comprise a pre-established threshold such as 75%, 80%, 85%, 90%, or the like. The existing crop type model can be considered adequate (or sufficiently accurate) for the specific geographic region and time period associated with the images of interest in block 502. In some embodiments, blocks 516, 518, 520, 522, 524 may be similar to the respective blocks 218, 220, 222, 224, 226 of Figure 2, except that the crop type classification of interest is the one determined in block 508. In block 518, if the crop type classification is to be updated (“yes” branch of block 518), then process 500 can return to block 502. For crop type classification updates, blocks 510, 512, and 514 may not need to be repeated because the model's suitability/accuracy was initially confirmed.
[0094] If the accuracy of the predicted crop types is less than the threshold (“no” branch of block 514), then process 500 can proceed to block 524. A new crop type model can be generated that is associated with the same (or almost the same) geographic region and time period as the images obtained in block 502. The new crop type model may comprise a modification of the existing crop type model, or a model trained only with data corresponding to the geographic region and time period of the images of interest. In block 524, training logic 124 can be configured to generate a new crop type model based on the (filtered) ground truth data from block 512 applied to one or more machine learning systems/techniques. Block 524 can be similar to block 206 of Figure 2.
[0095] Next, in block 526, the accuracy of the new crop type model can be assessed. If the accuracy is less than a threshold (“no” branch of block 526), then additional ground truth data can be obtained or received in block 528, and training/refinement/construction of the new crop type model can continue by returning to block 524. If the accuracy equals or exceeds the threshold (“yes” branch of block 526), then process 500 can proceed to block 506 to use the new crop type model with the (filtered) image sets of block 504 to generate crop type heat maps associated with the (filtered) image sets. In the case where a new crop type model was generated due to insufficient accuracy of the existing crop type model, blocks 510, 512, 514 may not need to be repeated.
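The control flow of blocks 514 and 524-528 can be sketched as follows (an illustrative outline only, with hypothetical `evaluate` and `retrain` callables standing in for training logic 124 and the ground truth comparison):

```python
def select_crop_type_model(existing_model, evaluate, retrain,
                           threshold=0.85, max_rounds=10):
    """Reuse the existing crop type model if it is accurate enough for
    the new region/period; otherwise train a new model until it meets
    the threshold.

    evaluate(model) -> accuracy against the (filtered) ground truth data.
    retrain(model)  -> a new/refined model (e.g., with added ground truth).
    """
    if evaluate(existing_model) >= threshold:   # "yes" branch of block 514
        return existing_model
    model = existing_model                      # "no" branch: go to block 524
    for _ in range(max_rounds):
        model = retrain(model)                  # blocks 524/528 loop
        if evaluate(model) >= threshold:        # block 526 check
            return model
    raise RuntimeError("model did not reach the accuracy threshold")
```

The `max_rounds` guard is an assumption added for the sketch; the specification simply continues obtaining additional ground truth data until the threshold is met.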
[0096] In this way, the crop type classification for crop fields/subfields located in countries outside the United States and/or during time periods other than recent years can also be determined in an inexpensive, accurate, and automatic manner. Current and past crop fields/subfields (as long as aerial image data is available) across the globe can be classified by crop type. Historical aerial imagery, potentially going back 20 to 40 years depending on the availability of aerial imagery, can be applied to the crop type model to retroactively classify crop types in those images. The ability to retroactively classify historical images can facilitate the determination of various trends (for example, in crop use, crop yields, etc.).
[0097] Figure 6 depicts an example device that can be implemented in system 100 of the present disclosure, according to some embodiments. The device of Figure 6 can comprise at least a portion of any of server 104, database 106, server 108, database 110, device 112, and/or device 116. Platform 600 as illustrated includes bus 615 or other internal communication means to communicate information, and processor 610 coupled to bus 615 to process information. The platform further comprises random access memory (RAM) or another volatile storage device 650 (alternatively referred to in this document as main memory), coupled to bus 615 to store information and instructions to be executed by processor 610. Main memory 650 can also be used to temporarily store variables or other intermediate information during execution of instructions by processor 610. Platform 600 also comprises read-only memory (ROM) and/or static storage device 620 coupled to bus 615 to store static information and instructions for processor 610, and data storage device 625 such as a magnetic disk, an optical disk and its corresponding disk drive, or a portable storage device (for example, a universal serial bus (USB) flash drive, a Secure Digital (SD) card). Data storage device 625 is coupled to bus 615 to store information and instructions.
[0098] Platform 600 may also be coupled to display device 670, such as a cathode ray tube (CRT) or liquid crystal display (LCD), coupled to bus 615 via bus 665 to display information to a computer user. In embodiments where platform 600 provides computing power and connectivity to an installed display device, display device 670 can display the images overlaid with the crop field/subfield information as described above. Alphanumeric input device 675, including alphanumeric or other keys, can also be coupled to bus 615 via bus 665 (for example, via infrared (IR) or radio frequency (RF) signals) to communicate information and command selections to processor 610.
[0099] Another component, which can optionally be coupled to platform 600, is communication device 690 for accessing other nodes in a distributed system through a network. Communication device 690 can include any of several commercially available networking peripheral devices, such as those used for coupling to an Ethernet, token ring, Internet, or wide area network. Communication device 690 can also be a null-modem connection, or any other mechanism that provides connectivity between platform 600 and the outside world. Note that any or all of the components of this system illustrated in Figure 6 and associated hardware can be used in various embodiments of the disclosure.
[00100] [00100] The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a non-transitory machine-readable (for example, computer) storage medium, which when executed by a machine will cause the machine to perform the described operations. In addition, processes can be embodied within hardware, such as an application-specific integrated circuit (ASIC) or otherwise.
[00101] A tangible machine-readable storage medium includes any mechanism that provides (for example, stores) information in a non-transitory form accessible by a machine (for example, a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine-readable storage medium includes recordable/non-recordable media (for example, read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory storage devices, etc.).
[00102] The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the exact forms disclosed. Although specific embodiments of, and examples for, the invention are described in this document for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
[00103] These modifications can be made to the invention in light of the detailed description above. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Instead, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
Claims:
Claims (22)
[1]
1. Method, characterized by the fact that it comprises: obtaining a plurality of sets of images associated with a geographic region and a period of time, in which each set of images of the plurality of sets of images comprises multispectral and time series images that represent a respective specific portion of the geographic region during the period of time; predicting one or more crop types growing in each of the specific locations within the specific portion of the geographic region associated with a set of images from the plurality of sets of images; determining a crop type classification for each of the specific locations based on the one or more crop types predicted for the respective specific locations; and generating a crop indicative image that comprises at least one image of the multispectral and time series images of the set of images overlaid with indications of the crop type classification determined for the respective specific locations.
[2]
2. Method, according to claim 1, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises: predicting the presence of a crop at the specific locations; determining crop boundary locations within the specific portion of the geographic region based on the predicted presence of the crop at the specific locations; and predicting the one or more crop types growing within each of the determined crop boundary locations.
[3]
3. Method, according to claim 1, characterized by the fact that determining the crop type classification for each of the specific locations comprises, for each of the specific locations, selecting a predicted dominant crop type from among the crop types predicted for the respective specific location, where the predicted dominant crop type is the crop type classification.
[4]
4. Method, according to claim 3, characterized by the fact that determining the crop type classification for each of the specific locations comprises: for each of the specific locations, if a predicted dominant crop type is absent, dividing the respective specific location into a plurality of sub-locations and classifying each of the respective sub-locations of the plurality of sub-locations as a respective crop type from among the crop types predicted for the specific location.
[5]
5. Method, according to claim 1, characterized by the fact that it further comprises estimating a crop yield for each of the specific locations based on the crop type classification determined for the respective specific locations.
[6]
6. Method, according to claim 1, characterized by the fact that it further comprises determining crop management practices for each of the specific locations based on the crop type classification determined for the respective specific locations.
[7]
7. Method, according to claim 1, characterized by the fact that determining the crop type classification for each of the specific locations comprises determining the crop type classification down to a sub-meter ground resolution for each of the specific locations.
[8]
8. Method, according to claim 1, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises applying the set of images to one or more machine learning systems or to a convolutional neural network (CNN).
[9]
9. Method, according to claim 8, characterized by the fact that the one or more machine learning systems or CNN is configured to predict the one or more crop types growing in each of the specific locations after supervised training on ground truth data.
[10]
10. Method, according to claim 9, characterized by the fact that the ground truth data comprises one or more of government crop data, publicly available crop data, images with crop areas identified at low ground resolution, images with crop types identified at low ground resolution, images with manually identified crop boundaries, images with manually identified crop boundaries and crop types, crop survey data, sampled crop data, and farmer reports.
[11]
11. Method, according to claim 1, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises, for each of the specific locations, analyzing the time series images for changes over time of pixels associated with the respective specific locations, where a specific pixel change pattern is associated with at least one crop type.
[12]
12. Method, according to claim 1, characterized by the fact that it further comprises: causing the crop indicative image to be displayed on a device accessible by a user; and receiving a modification, from the user, of a specific indication from among the indications of the crop type classification determined for the respective specific locations, where the modification comprises a manual reclassification of the crop type for the specific location associated with the specific indication.
[13]
13. One or more computer-readable storage media, characterized by the fact that they comprise a plurality of instructions to cause a device, in response to execution by one or more processors of the device, to: obtain a plurality of sets of images associated with a geographic region and a period of time, in which each set of images of the plurality of sets of images comprises multispectral and time series images that represent a respective specific portion of the geographic region during the period of time; predict one or more crop types growing in each of the specific locations within the specific portion of the geographic region associated with a set of images from the plurality of sets of images; determine a crop type classification for each of the specific locations based on the one or more crop types predicted for the respective specific locations; and generate a crop indicative image that comprises at least one image of the multispectral and time series images of the set of images overlaid with indications of the crop type classification determined for the respective specific locations.
[14]
14. Computer-readable storage medium, according to claim 13, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises: predicting the presence of a crop at the specific locations; determining crop boundary locations within the specific portion of the geographic region based on the predicted presence of the crop at the specific locations; and predicting the one or more crop types growing within each of the determined crop boundary locations.
[15]
15. Computer-readable storage medium, according to claim 13, characterized by the fact that determining the crop type classification for each of the specific locations comprises, for each of the specific locations, selecting a predicted dominant crop type from among the crop types predicted for the respective specific location, where the predicted dominant crop type is the crop type classification.
[16]
16. Computer-readable storage medium, according to claim 13, characterized by the fact that determining the crop type classification for each of the specific locations comprises: for each of the specific locations, if a predicted dominant crop type is absent, dividing the respective specific location into a plurality of sub-locations and classifying each of the respective sub-locations of the plurality of sub-locations as a respective crop type from among the crop types predicted for the specific location.
[17]
17. Computer-readable storage medium, according to claim 13, characterized by the fact that determining the crop type classification for each of the specific locations comprises determining the crop type classification down to a sub-meter ground resolution for each of the specific locations.
[18]
18. Computer-readable storage medium, according to claim 13, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises applying the set of images to one or more machine learning systems or to a convolutional neural network (CNN).
[19]
19. Computer-readable storage medium, according to claim 18, characterized by the fact that the one or more machine learning systems or CNN is configured to predict the one or more crop types growing in each of the specific locations after supervised training on ground truth data.
[20]
20. Computer-readable storage medium, according to claim 19, characterized by the fact that the ground truth data comprises one or more of government crop data, publicly available crop data, images with crop areas identified at low ground resolution, images with crop types identified at low ground resolution, images with manually identified crop boundaries, images with manually identified crop boundaries and crop types, crop survey data, sampled crop data, and farmer reports.
[21]
21. Computer-readable storage medium, according to claim 13, characterized by the fact that a first resolution of a first image of the set of images is different from a second resolution of a second image of the set of images, the first resolution is less than a third resolution of the crop indicative image, and a fourth resolution of at least a portion of the ground truth data is less than the third resolution of the crop indicative image.
[22]
22. Computer-readable storage medium, according to claim 13, characterized by the fact that predicting the one or more crop types growing in each of the specific locations comprises, for each of the specific locations, analyzing the time series images for changes over time of pixels associated with the respective specific locations, where a specific pixel change pattern is associated with at least one crop type.
Similar technologies:
Publication number | Publication date | Patent title
BR112020014942A2|2020-12-08|CLASSIFICATION OF CULTURE TYPES IN IMAGES
Li et al.2015|A 30-year | record of annual urban dynamics of Beijing City derived from Landsat data
Li et al.2014|Object-based land-cover mapping with high resolution aerial photography at a county scale in midwestern USA
Jin et al.2018|Land-cover mapping using Random Forest classification and incorporating NDVI time-series and texture: A case study of central Shandong
O'Neil-Dunne et al.2014|A versatile, production-oriented approach to high-resolution tree-canopy mapping in urban and suburban landscapes using GEOBIA and data fusion
Ural et al.2011|Building population mapping with aerial imagery and GIS data
US10885331B2|2021-01-05|Crop boundary detection in images
Martínez-Casasnovas et al.2005|Mapping multi-year cropping patterns in small irrigation districts from time-series analysis of Landsat TM images
Lu et al.2016|IBM PAIRS curated big data service for accelerated geospatial data analytics and discovery
Hussain et al.2016|Object-based urban land cover classification using rule inheritance over very high-resolution multisensor and multitemporal data
Ellis et al.2019|Object-based delineation of urban tree canopy: Assessing change in Oklahoma City, 2006–2013
Li et al.2012|Object-oriented classification of land use/cover using digital aerial orthophotography
Tormos et al.2012|Object-based image analysis for operational fine-scale regional mapping of land cover within river corridors from multispectral imagery and thematic data
Li et al.2016|An all-season sample database for improving land-cover mapping of Africa with two classification schemes
González-Yebra et al.2018|Methodological proposal to assess plastic greenhouses land cover change from the combination of archival aerial orthoimages and Landsat data
Baker et al.2019|A GIS and object based image analysis approach to mapping the greenspace composition of domestic gardens in Leicester, UK
Namdar et al.2014|Land-use and land-cover classification in semi-arid regions using independent component analysis | and expert classification
Hirata et al.2018|Object-based mapping of aboveground biomass in tropical forests using LiDAR and very-high-spatial-resolution satellite data
Malkin et al.2021|High-resolution land cover change from low-resolution labels: Simple baselines for the 2021 ieee grss data fusion contest
Ramadan et al.2004|Satellite remote sensing for urban growth assessment in Shaoxing City, Zhejiang Province
Jombo et al.2021|Classification of tree species in a heterogeneous urban environment using object-based ensemble analysis and World View-2 satellite imagery
JP7034304B2|2022-03-11|Crop type classification in images
Bekalo2009|Spatial metrics and Landsat data for urban landuse change detection: case of Addis Ababa, Ethiopia.
Hill et al.2014|Land transformation processes in NE China: tracking trade-offs in ecosystem services across several decades with Landsat-TM/ETM+ time series
Lee et al.2020|Evaluation of crop mapping on fragmented and complex slope farmlands through random forest and object-oriented analysis using unmanned aerial vehicles
Patent family:
Publication number | Publication date
US20190228224A1|2019-07-25|
US10909368B2|2021-02-02|
JP2021510880A|2021-04-30|
CA3088641A1|2019-08-01|
EP3743876A1|2020-12-02|
EP3743876A4|2021-10-27|
CN111630551A|2020-09-04|
WO2019147439A1|2019-08-01|
US20210150209A1|2021-05-20|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

US7916898B2|2003-09-15|2011-03-29|Deere & Company|Method and system for identifying an edge of a crop|
US20070036467A1|2004-07-26|2007-02-15|Coleman Christopher R|System and method for creating a high resolution material image|
US20060287896A1|2005-06-16|2006-12-21|Deere & Company, A Delaware Corporation|Method for providing crop insurance for a crop associated with a defined attribute|
US9152938B2|2008-08-11|2015-10-06|Farmlink Llc|Agricultural machine and operator performance information systems and related methods|
US9084389B2|2009-12-17|2015-07-21|Mapshots, Inc|Multiple field boundary data sets in an automated crop recordkeeping system|
CN102194127B|2011-05-13|2012-11-28|中国科学院遥感应用研究所|Multi-frequency synthetic aperture radar data crop sensing classification method|
US9292797B2|2012-12-14|2016-03-22|International Business Machines Corporation|Semi-supervised data integration model for named entity classification|
US9147132B2|2013-09-11|2015-09-29|Digitalglobe, Inc.|Classification of land based on analysis of remotely-sensed earth images|
WO2015051339A1|2013-10-03|2015-04-09|Farmers Business Network, Llc|Crop model and prediction analytics|
US9858502B2|2014-03-31|2018-01-02|Los Alamos National Security, Llc|Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding|
CN105005782A|2014-04-24|2015-10-28|中国科学院遥感与数字地球研究所|Fine method for global vegetation classification based on multi-temporal remote sensing data and spectroscopic data|
JP6213681B2|2014-09-02|2017-10-18|富士通株式会社|Yield amount recording method, yield amount recording program, and yield amount recording apparatus|
US20170161560A1|2014-11-24|2017-06-08|Prospera Technologies, Ltd.|System and method for harvest yield prediction|
CN104952070B|2015-06-05|2018-04-13|中北大学|A kind of corn field remote sensing image segmentation method of class rectangle guiding|
CN104951754A|2015-06-08|2015-09-30|中国科学院遥感与数字地球研究所|Sophisticated crop classifying method based on combination of object oriented technology and NDVI time series|
CN107358214A|2017-07-20|2017-11-17|中国人民解放军国防科学技术大学|Polarization SAR terrain classification method based on convolutional neural networks|
US10586105B2|2016-12-30|2020-03-10|International Business Machines Corporation|Method and system for crop type identification using satellite observation and weather data|
US10445877B2|2016-12-30|2019-10-15|International Business Machines Corporation|Method and system for crop recognition and boundary delineation|
US11263707B2|2017-08-08|2022-03-01|Indigo Ag, Inc.|Machine learning in agricultural planting, growing, and harvesting contexts|
US10621434B2|2018-01-25|2020-04-14|International Business Machines Corporation|Identification and localization of anomalous crop health patterns|
US11138677B2|2018-04-24|2021-10-05|Indigo Ag, Inc.|Machine learning in an online agricultural system|
US11195030B2|2018-09-14|2021-12-07|Honda Motor Co., Ltd.|Scene classification|
US11034357B2|2018-09-14|2021-06-15|Honda Motor Co., Ltd.|Scene classification prediction|
US11197417B2|2018-09-18|2021-12-14|Deere & Company|Grain quality control system and method|
US11240961B2|2018-10-26|2022-02-08|Deere & Company|Controlling a harvesting machine based on a geo-spatial representation indicating where the harvesting machine is likely to reach capacity|
US11178818B2|2018-10-26|2021-11-23|Deere & Company|Harvesting machine control system with fill level processing based on yield data|
US11079725B2|2019-04-10|2021-08-03|Deere & Company|Machine control using real-time model|
US11234366B2|2019-04-10|2022-02-01|Deere & Company|Image selection for machine control|
US11238283B2|2019-10-04|2022-02-01|The Climate Corporation|Hybrid vision system for crop land navigation|
US11157811B2|2019-10-28|2021-10-26|International Business Machines Corporation|Stub image generation for neural network training|
CN112287186B|2020-12-24|2021-03-26|北京数字政通科技股份有限公司|Intelligent classification method and system for city management|
Legal status:
2021-12-07| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Filing date | Patent title
US201862620939P| true| 2018-01-23|2018-01-23|
US62/620,939|2018-01-23|
US16/218,305|2018-12-12|
US16/218,305|US10909368B2|2018-01-23|2018-12-12|Crop type classification in images|
PCT/US2019/013704|WO2019147439A1|2018-01-23|2019-01-15|Crop type classification in images|